AAAI.2021 - Search and Optimization

Total: 28

#1 A Fast Exact Algorithm for the Resource Constrained Shortest Path Problem

Authors: Saman Ahmadi ; Guido Tack ; Daniel D. Harabor ; Philip Kilby

Resource constrained path finding is a well-studied topic in AI, with real-world applications in different areas such as transportation and robotics. This paper introduces several heuristics in the resource constrained path finding context that significantly improve the algorithmic performance of the initialisation phase and the core search. We implement our heuristics on top of a bidirectional A* algorithm and evaluate them on a set of large instances. The experimental results show that, for the first time in the context of constrained path finding, our fast and enhanced algorithm can solve all of the benchmark instances to optimality, and compared to state-of-the-art algorithms, it can improve existing runtimes by up to four orders of magnitude on large network graphs.
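
As a point of reference for the problem setting (not the authors' bidirectional A* or its new heuristics), the sketch below solves a toy resource-constrained shortest-path instance with a plain label-setting search and dominance pruning; the graph format, costs, and resource budget are invented for illustration.

```python
import heapq

def rcsp(graph, source, target, limit):
    """Min-cost path from source to target whose total resource use stays within limit.

    graph: {u: [(v, cost, resource), ...]} -- hypothetical toy input format.
    Plain label-setting search with dominance pruning; a baseline, not the paper's algorithm.
    """
    labels = {source: [(0, 0)]}            # non-dominated (cost, resource) labels per node
    heap = [(0, 0, source, [source])]      # (cost, resource, node, path)
    while heap:
        cost, res, u, path = heapq.heappop(heap)
        if u == target:
            return cost, path
        for v, c, r in graph.get(u, []):
            nc, nr = cost + c, res + r
            if nr > limit:
                continue
            # Skip the new label if an existing label at v is at least as good in both criteria.
            if any(lc <= nc and lr <= nr for lc, lr in labels.get(v, [])):
                continue
            labels.setdefault(v, []).append((nc, nr))
            heapq.heappush(heap, (nc, nr, v, path + [v]))
    return None

# Toy instance: edges are (target, cost, resource); resource budget 5.
G = {"s": [("a", 1, 3), ("b", 4, 1)], "a": [("t", 1, 3)], "b": [("t", 1, 1)]}
print(rcsp(G, "s", "t", 5))   # (5, ['s', 'b', 't']) -- the cheaper route via 'a' needs 6 resource
```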

#2 Generalization in Portfolio-Based Algorithm Selection

Authors: Maria-Florina Balcan ; Tuomas Sandholm ; Ellen Vitercik

Portfolio-based algorithm selection has seen tremendous practical success over the past two decades. This algorithm configuration procedure works by first selecting a portfolio of diverse algorithm parameter settings, and then, on a given problem instance, using an algorithm selector to choose a parameter setting from the portfolio with strong predicted performance. Oftentimes, both the portfolio and the algorithm selector are chosen using a training set of typical problem instances from the application domain at hand. In this paper, we provide the first provable guarantees for portfolio-based algorithm selection. We analyze how large the training set should be to ensure that the resulting algorithm selector's average performance over the training set is close to its future (expected) performance. This involves analyzing three key reasons why these two quantities may diverge: 1) the learning-theoretic complexity of the algorithm selector, 2) the size of the portfolio, and 3) the learning-theoretic complexity of the algorithm's performance as a function of its parameters. We introduce an end-to-end learning-theoretic analysis of the portfolio construction and algorithm selection together. We prove that if the portfolio is large, overfitting is inevitable, even with an extremely simple algorithm selector. With experiments, we illustrate a tradeoff exposed by our theoretical analysis: as we increase the portfolio size, we can hope to include a well-suited parameter setting for every possible problem instance, but it becomes impossible to avoid overfitting.
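
A minimal sketch of the portfolio-plus-selector pipeline the guarantees are about; the nearest-neighbour selector, the synthetic runtime function, and all parameters are invented for illustration (the paper bounds the gap between training-set and future performance for such pipelines rather than prescribing a particular selector).

```python
import numpy as np

rng = np.random.default_rng(0)
portfolio = [0.1, 0.5, 0.9]               # hypothetical parameter settings

def runtime(param, features):
    # Synthetic stand-in for true solver performance: each setting suits a different region.
    return abs(param - features[0]) + 0.1 * features[1]

# Training set of "typical" instances, each described by two features.
train = rng.random((50, 2))
train_runtimes = np.array([[runtime(p, x) for p in portfolio] for x in train])

def select(features, k=5):
    """k-nearest-neighbour selector: copy the best portfolio member of similar training instances."""
    dists = np.linalg.norm(train - features, axis=1)
    neighbours = np.argsort(dists)[:k]
    return int(np.argmin(train_runtimes[neighbours].mean(axis=0)))

# Average performance on the training set versus on fresh instances -- the gap the paper analyses.
test = rng.random((200, 2))
train_perf = np.mean([runtime(portfolio[select(x)], x) for x in train])
future_perf = np.mean([runtime(portfolio[select(x)], x) for x in test])
print(f"training-set performance {train_perf:.3f}  vs  future performance {future_perf:.3f}")
```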

#3 Combining Preference Elicitation with Local Search and Greedy Search for Matroid Optimization

Authors: Nawal Benabbou ; Cassandre Leroy ; Thibaut Lust ; Patrice Perny

We propose two incremental preference elicitation methods for interactive preference-based optimization on weighted matroid structures. More precisely, for linear objective (utility) functions, we propose an interactive greedy algorithm interleaving preference queries with the incremental construction of an independent set to obtain an optimal or near-optimal base of a matroid. We also propose an interactive local search algorithm based on sequences of possibly improving exchanges for the same problem. For both algorithms, we provide performance guarantees on the quality of the returned solutions and the number of queries. Our algorithms are tested on the uniform, graphical and scheduling matroids to solve three different problems (committee election, spanning tree, and scheduling problems) and evaluated in terms of computation times, number of queries, and empirical error.

#4 f-Aware Conflict Prioritization & Improved Heuristics For Conflict-Based Search

Authors: Eli Boyarski ; Ariel Felner ; Pierre Le Bodic ; Daniel D. Harabor ; Peter J. Stuckey ; Sven Koenig

Conflict-Based Search (CBS) is a leading two-level algorithm for optimal Multi-Agent Path Finding (MAPF). The main step of CBS is to expand nodes by resolving conflicts (where two agents collide). Choosing the ‘right’ conflict to resolve can greatly speed up the search. CBS first resolves conflicts where the costs (g-values) of the resulting child nodes are larger than the cost of the node to be split. However, the recent addition of high-level heuristics to CBS and expanding nodes according to f=g+h reduces the relevance of this conflict prioritization method. Therefore, we introduce an expanded categorization of conflicts, which first resolves conflicts where the f-values of the child nodes are larger than the f-value of the node to be split, and present a method for identifying such conflicts. We also enhance all known heuristics for CBS by using information about the cost of resolving certain conflicts, incurring only a small computational overhead. Finally, we experimentally demonstrate that both the expanded categorization of conflicts and the improved heuristics contribute to making CBS even more efficient.
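
To make the prioritization concrete, here is our reading of the categorization (an illustrative helper, not the authors' code): given the f-value of the constraint-tree node being split and the f-values of the two children created by resolving a conflict, classify the conflict in the usual cardinal / semi-cardinal / non-cardinal style, but with respect to f = g + h rather than g alone.

```python
def classify_conflict(parent_f, child_f_left, child_f_right):
    """Classify a CBS conflict by how many children have strictly larger f than the parent.

    'cardinal'      -> both children increase f
    'semi-cardinal' -> exactly one child increases f
    'non-cardinal'  -> neither child increases f
    Resolving f-cardinal conflicts first is the prioritization argued for above.
    """
    increases = (child_f_left > parent_f) + (child_f_right > parent_f)
    return {2: "cardinal", 1: "semi-cardinal", 0: "non-cardinal"}[increases]

# Splitting a node with f = 20:
print(classify_conflict(20, 21, 22))  # cardinal -- resolve this conflict first
print(classify_conflict(20, 20, 21))  # semi-cardinal
```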

#5 Parameterized Algorithms for MILPs with Small Treedepth

Authors: Cornelius Brand ; Martin Koutecký ; Sebastian Ordyniak

Solving (mixed) integer (linear) programs, (M)I(L)Ps for short, is a fundamental optimisation task with a wide range of applications in artificial intelligence and computer science in general. While hard in general, recent years have brought about vast progress for solving structurally restricted, (non-mixed) ILPs: n-fold, tree-fold, 2-stage stochastic and multi-stage stochastic programs admit efficient algorithms, and all of these special cases are subsumed by the class of ILPs of small treedepth. In this paper, we extend this line of work to the mixed case, by showing an algorithm solving MILP in time f(a,d)poly(n), where a is the largest coefficient of the constraint matrix, d is its treedepth, and n is the number of variables. This is enabled by proving bounds on the denominators (fractionality) of the vertices of bounded-treedepth (non-integer) linear programs. We do so by carefully analysing the inverses of invertible sub-matrices of the constraint matrix. This allows us to scale the mixed program up to the integer grid and apply the known methods for integer programs. We then trace the limiting boundary of our "bounded fractionality" approach both in terms of going beyond MILP (by allowing non-linear objectives) as well as its usefulness for generalising other important known tractable classes of ILP. On the positive side, we show that our result can be generalised from MILP to MIP with piece-wise linear separable convex objectives with integer breakpoints. On the negative side, we show that going even slightly beyond such objectives or considering other natural related tractable classes of ILP leads to unbounded fractionality. Finally, we show that restricting the structure of only the integral variables in the constraint matrix does not yield tractable special cases.

#6 NuQClq: An Effective Local Search Algorithm for Maximum Quasi-Clique Problem

Authors: Jiejiang Chen ; Shaowei Cai ; Shiwei Pan ; Yiyuan Wang ; Qingwei Lin ; Mengyu Zhao ; Minghao Yin

The maximum quasi-clique problem (MQCP) is an important extension of the maximum clique problem with wide applications. Recent heuristic MQCP algorithms can hardly solve large and hard graphs effectively. This paper develops an efficient local search algorithm named NuQClq for the MQCP, which has two main ideas. First, we propose a novel vertex selection strategy, which uses cumulative saturation information as a selection criterion when candidate vertices have equal values on the primary scoring function. Second, a variant of configuration checking named BoundedCC is designed by setting an upper bound on the threshold of forbidding strength. When the threshold value of a vertex exceeds this upper bound, we reset it to increase the diversity of the search process. Experiments on a broad range of classic benchmarks and sparse instances show that NuQClq significantly outperforms the state-of-the-art MQCP algorithms on most instances.
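
For context, under the usual edge-density definition (which we assume is the variant targeted here), a gamma-quasi-clique is a vertex set S whose induced subgraph contains at least gamma * |S|(|S|-1)/2 edges; a quick feasibility check:

```python
from itertools import combinations

def is_quasi_clique(adj, S, gamma):
    """True if the vertex set S induces at least gamma * |S|*(|S|-1)/2 edges.

    adj: {vertex: set_of_neighbours} for an undirected graph.
    """
    S = list(S)
    if len(S) < 2:
        return True
    edges = sum(1 for u, v in combinations(S, 2) if v in adj[u])
    return edges >= gamma * len(S) * (len(S) - 1) / 2

# Tiny example: a triangle plus a pendant vertex.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(is_quasi_clique(adj, {1, 2, 3}, 0.9))      # True: a clique
print(is_quasi_clique(adj, {1, 2, 3, 4}, 0.9))   # False: only 4 of 6 possible edges
```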

#7 Symmetry Breaking for k-Robust Multi-Agent Path Finding

Authors: Zhe Chen ; Daniel D. Harabor ; Jiaoyang Li ; Peter J. Stuckey

During Multi-Agent Path Finding (MAPF) problems, agents can be delayed by unexpected events. To address such situations, recent work describes k-Robust Conflict-Based Search (k-CBS): an algorithm that produces a coordinated and collision-free plan that is robust for up to k delays. In this work we introduce a variety of pairwise symmetry-breaking constraints, specific to k-robust planning, that can efficiently find compatible and optimal paths for pairs of conflicting agents. We give a thorough description of the new constraints and report large improvements to success rate in a range of domains including: (i) classic MAPF benchmarks; (ii) automated warehouse domains; and (iii) maps from the 2019 Flatland Challenge, a recently introduced railway domain where k-robust planning can be fruitfully applied to schedule trains.
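
For readers unfamiliar with the k-robust setting, the standard notion of a k-delay conflict (our summary, not the paper's code) is that two agents conflict if they occupy the same vertex at times that differ by at most k. A minimal detector over two single-agent paths:

```python
def k_robust_conflicts(path_a, path_b, k):
    """Return (vertex, t_a, t_b) triples witnessing that the two paths are not k-robust.

    Paths are lists of vertices indexed by timestep; agents are assumed to wait at their
    final vertex after finishing. A conflict is the same vertex occupied at times t_a and
    t_b with |t_a - t_b| <= k.
    """
    horizon = max(len(path_a), len(path_b))
    at = lambda p, t: p[t] if t < len(p) else p[-1]
    conflicts = []
    for ta in range(horizon):
        for tb in range(max(0, ta - k), min(horizon, ta + k + 1)):
            if at(path_a, ta) == at(path_b, tb):
                conflicts.append((at(path_a, ta), ta, tb))
    return conflicts

# With k = 1, agent B entering vertex 'C' one step after agent A leaves it is still a conflict.
print(k_robust_conflicts(["A", "C", "D"], ["B", "B", "C"], k=1))   # [('C', 1, 2)]
```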

#8 Escaping Local Optima with Non-Elitist Evolutionary Algorithms

Authors: Duc-Cuong Dang ; Anton Eremeev ; Per Kristian Lehre

Most discrete evolutionary algorithms (EAs) implement elitism, meaning that they make the biologically implausible assumption that the fittest individuals never die. While elitism favours exploitation and ensures that the best seen solutions are not lost, it has been widely conjectured that non-elitism is necessary to explore promising fitness valleys without getting stuck in local optima. Determining when non-elitist EAs outperform elitist EAs has been one of the most fundamental open problems in evolutionary computation. A recent analysis of a non-elitist EA shows that this algorithm does not outperform its elitist counterparts on the benchmark problem JUMP. We solve this open problem through rigorous runtime analysis of elitist and non-elitist population-based EAs on a class of multi-modal problems. We show that with 3-tournament selection and appropriate mutation rates, the non-elitist EA optimises the multi-modal problem in expected polynomial time, while an elitist EA requires exponential time with overwhelmingly high probability. A key insight in our analysis is the non-linear selection profile of the tournament selection mechanism which, with appropriate mutation rates, allows a small sub-population to reside on the local optimum while the rest of the population explores the fitness valley. In contrast, we show that the comma-selection mechanism which does not have this non-linear profile, fails to optimise this problem in polynomial time. The theoretical analysis is complemented with an empirical investigation on instances of the set cover problem, showing that non-elitist EAs can perform better than the elitist ones. We also provide examples where usage of mutation rates close to the error thresholds is beneficial when employing non-elitist population-based EAs.
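
A compact sketch of the kind of algorithm analysed: a generic non-elitist EA with 3-tournament selection and standard bit-wise mutation, shown here on plain OneMax rather than the paper's multi-modal problem class; the population size, mutation parameter, and iteration budget are illustrative choices.

```python
import random

def onemax(x):
    return sum(x)

def non_elitist_ea(n=30, pop_size=60, chi=1.0, generations=500, fitness=onemax, seed=1):
    """Non-elitist EA: every individual of the next generation is a mutated copy of a
    3-tournament winner; nothing survives unchanged, so the best-so-far can be lost."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    p_mut = chi / n   # mutation rate chi/n; the theory keeps chi below the error threshold
                      # of the selection mechanism (roughly ln 3 for 3-tournament selection)
    best = max(map(fitness, pop))
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            winner = max(rng.sample(pop, 3), key=fitness)                     # 3-tournament selection
            new_pop.append([b ^ (rng.random() < p_mut) for b in winner])      # bit-wise mutation
        pop = new_pop
        best = max(best, max(map(fitness, pop)))
    return best

print(non_elitist_ea())   # best OneMax value seen; with these settings it usually reaches n = 30
```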

#9 Pareto Optimization for Subset Selection with Dynamic Partition Matroid Constraints

Authors: Anh Viet Do ; Frank Neumann

In this study, we consider subset selection problems with submodular or monotone discrete objective functions under partition matroid constraints where the thresholds are dynamic. We focus on POMC, a simple Pareto optimization approach that has been shown to be effective on such problems. Our analysis departs from single-constraint problems and extends to problems with multiple constraints. We show that previous results on POMC's performance also hold for multiple constraints. Our experimental investigations on random undirected max-cut problems demonstrate POMC's competitiveness against the classical GREEDY algorithm with a restart strategy.
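
A minimal sketch of a POMC-style Pareto optimization loop for subset selection, simplified to a single static budget constraint (the paper's setting has multiple partition matroid constraints with dynamic thresholds); the knapsack-like instance at the bottom is toy data.

```python
import random

def pomc(objective, costs, budget, n, iterations=20000, seed=0):
    """POMC/GSEMO-style Pareto optimization: keep an archive of mutually non-dominated
    (objective, cost) subsets, mutate a random archive member by independent bit flips,
    and insert the child if nothing in the archive weakly dominates it."""
    rng = random.Random(seed)

    def evaluate(x):
        return objective(x), sum(c for i, c in enumerate(costs) if x[i])

    def dominates(a, b):   # maximise objective value, minimise cost
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    archive = {(0,) * n: evaluate((0,) * n)}
    for _ in range(iterations):
        parent = rng.choice(list(archive))
        child = tuple(b ^ (rng.random() < 1 / n) for b in parent)
        val = evaluate(child)
        if not any(dominates(v, val) or v == val for v in archive.values()):
            archive = {x: v for x, v in archive.items() if not dominates(val, v)}
            archive[child] = val
    feasible = [(v[0], x) for x, v in archive.items() if v[1] <= budget]
    return max(feasible) if feasible else None

# Toy instance: pick items maximising total value subject to a weight budget of 5.
values, weights = [4, 3, 2, 5, 1], [2, 2, 1, 3, 1]
obj = lambda x: sum(v for i, v in enumerate(values) if x[i])
print(pomc(obj, weights, budget=5, n=5))
```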

#10 Theoretical Analyses of Multi-Objective Evolutionary Algorithms on Multi-Modal Objectives

Authors: Benjamin Doerr ; Weijie Zheng

Previous theory work on multi-objective evolutionary algorithms considers mostly easy problems that are composed of unimodal objectives. This paper takes a first step towards a deeper understanding of how evolutionary algorithms solve multi-modal multi-objective problems. We propose the OneJumpZeroJump problem, a bi-objective problem whose single objectives are isomorphic to the classic jump functions benchmark. We prove that the simple evolutionary multi-objective optimizer (SEMO) cannot compute the full Pareto front. In contrast, for all problem sizes n and all jump sizes k in [4..n/2-1], the global SEMO (GSEMO) covers the Pareto front in Θ((n-2k)n^k) iterations in expectation. To improve the performance, we combine the GSEMO with two approaches, a heavy-tailed mutation operator and a stagnation detection strategy, that showed advantages in single-objective multi-modal problems. Runtime improvements of asymptotic order at least k^Ω(k) are shown for both strategies. Our experiments verify the substantial runtime gains already for moderate problem sizes. Overall, these results show that the ideas recently developed for single-objective evolutionary algorithms can be effectively employed also in multi-objective optimization.
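
For reference, the single-objective benchmark the two objectives are isomorphic to is the standard jump function (restated below; the exact OneJumpZeroJump construction is given in the paper):

```latex
\[
\mathrm{Jump}_{n,k}(x) \;=\;
\begin{cases}
  k + \|x\|_1 & \text{if } \|x\|_1 \le n-k \text{ or } \|x\|_1 = n,\\[2pt]
  n - \|x\|_1 & \text{otherwise},
\end{cases}
\qquad \|x\|_1 = \textstyle\sum_{i=1}^{n} x_i .
\]
```

Maximising Jump_{n,k} requires crossing a fitness valley of width k (solutions with more than n-k but fewer than n ones) to reach the all-ones optimum; OneJumpZeroJump pairs such a jump on the number of ones with a symmetric jump on the number of zeros.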

#11 Multi-Objective Submodular Maximization by Regret Ratio Minimization with Theoretical Guarantee

Authors: Chao Feng ; Chao Qian

Submodular maximization has attracted much attention due to its wide application and attractive property. Previous works mainly considered one single objective function, while there can be multiple ones in practice. As the objectives are usually conflicting, there exists a set of Pareto optimal solutions, attaining different optimal trade-offs among multiple objectives. In this paper, we consider the problem of minimizing the regret ratio in multi-objective submodular maximization, which is to find at most k solutions to approximate the whole Pareto set as well as possible. We propose a new algorithm RRMS by sampling representative weight vectors and solving the corresponding weighted sums of objective functions using some given \alpha-approximation algorithm for single-objective submodular maximization. We prove that the regret ratio of the output of RRMS is upper bounded by 1-\alpha+O(\sqrt{d-1}\cdot(\frac{d}{k-d})^{\frac{1}{d-1}}), where d is the number of objectives. This is the first theoretical guarantee for the situation with more than two objectives. When d=2, it reaches the (1-\alpha+O(1/k))-guarantee of the only existing algorithm Polytope. Empirical results on the applications of multi-objective weighted maximum coverage and Max-Cut show the superior performance of RRMS over Polytope.
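
The quantity being minimised, under the usual definition of the maximum regret ratio of a solution set S with respect to d monotone objectives f_1, ..., f_d (our restatement; the paper's notation may differ slightly):

```latex
\[
\mathrm{RR}(S) \;=\; \max_{w \in \Delta_d}\;
\frac{\max_{x} \sum_{i=1}^{d} w_i f_i(x) \;-\; \max_{x \in S} \sum_{i=1}^{d} w_i f_i(x)}
     {\max_{x} \sum_{i=1}^{d} w_i f_i(x)},
\qquad
\Delta_d = \Big\{ w \ge 0 : \textstyle\sum_{i=1}^{d} w_i = 1 \Big\}.
\]
```

In words: the worst relative loss, over all linear scalarisations of the objectives, of restricting attention to the k solutions in S instead of the full Pareto set; RRMS keeps this small by sampling representative weight vectors w and running the given \alpha-approximate single-objective maximiser on each weighted sum.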

#12 Choosing the Initial State for Online Replanning

Authors: Maximilian Fickert ; Ivan Gavran ; Ivan Fedotov ; Jörg Hoffmann ; Rupak Majumdar ; Wheeler Ruml

The need to replan arises in many applications. However, in the context of planning as heuristic search, it raises an annoying problem: if the previous plan is still executing, what should the new plan search take as its initial state? If it were possible to accurately predict how long replanning would take, it would be easy to find the appropriate state at which control will transfer from the previous plan to the new one. But as planning problems can vary enormously in their difficulty, this prediction can be difficult. Many current systems merely use a manually chosen constant duration. In this paper, we show how such ad hoc solutions can be avoided by integrating the choice of the appropriate initial state into the search process itself. The search is initialized with multiple candidate initial states and a time-aware evaluation function is used to prefer plans whose total goal achievement time is minimal. Experimental results show that this approach yields better behavior than either guessing a constant or trying to predict replanning time in advance. By making replanning more effective and easier to implement, this work aids in creating planning systems that can better handle the inevitable exigencies of real-world execution.

#13 OpEvo: An Evolutionary Method for Tensor Operator Optimization

Authors: Xiaotian Gao ; Wei Cui ; Lintao Zhang ; Mao Yang

Training and inference efficiency of deep neural networks highly rely on the performance of tensor operators on hardware platforms. Manually optimizing tensor operators has limitations in terms of supporting new operators or hardware platforms. Therefore, automatically optimizing device code configurations of tensor operators is becoming increasingly attractive. However, current methods for tensor operator optimization usually suffer from poor sample efficiency due to the combinatorial search space. In this work, we propose a novel evolutionary method, OpEvo, which efficiently explores the search spaces of tensor operators by introducing a topology-aware mutation operation based on a q-random walk to leverage the topological structures over the search spaces. Our comprehensive experimental results show that, compared with state-of-the-art (SOTA) methods, OpEvo can find the best configuration with the lowest variance and the least effort in terms of number of trials and wall-clock time. All code of this work is available online.

#14 Efficient Bayesian Network Structure Learning via Parameterized Local Search on Topological Orderings

Authors: Niels Grüttemeier ; Christian Komusiewicz ; Nils Morawietz

In Bayesian Network Structure Learning (BNSL), we are given a variable set and parent scores for each variable and aim to compute a DAG, called Bayesian network, that maximizes the sum of parent scores, possibly under some structural constraints. Even very restricted special cases of BNSL are computationally hard, and, thus, in practice heuristics such as local search are used. In a typical local search algorithm, we are given some BNSL solution and ask whether there is a better solution within some pre-defined neighborhood of the solution. We study ordering-based local search, where a solution is described via a topological ordering of the variables. We show that given such a topological ordering, we can compute an optimal DAG whose ordering is within inversion distance r in subexponential FPT time; the parameter r allows one to balance between solution quality and running time of the local search algorithm. This running time bound can be achieved for BNSL without any structural constraints and for all structural constraints that can be expressed via a sum of weights that are associated with each parent set. We show that for other modification operations on the variable orderings, algorithms that run in FPT time with respect to r are unlikely to exist. We also outline the limits of ordering-based local search by showing that it cannot be used for common structural constraints on the moralized graph of the network.
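
The basic subroutine behind ordering-based search is simple to state: once a topological ordering is fixed, each variable can independently pick its best-scoring parent set among its predecessors, and the result is an optimal DAG for that ordering. The sketch below illustrates this step with a hypothetical score table; the paper's contribution concerns searching the inversion-distance-r neighbourhood of the ordering, not this step.

```python
def best_dag_for_ordering(ordering, parent_scores):
    """Return the optimal DAG (and its total score) among DAGs consistent with the ordering.

    parent_scores: {variable: {frozenset_of_parents: score}}. For each variable we keep only
    candidate parent sets contained in the variables preceding it and pick the best one;
    the per-variable choices are independent, so the combined DAG is optimal for the ordering.
    """
    dag, total, seen = {}, 0.0, set()
    for v in ordering:
        candidates = {ps: s for ps, s in parent_scores[v].items() if ps <= seen}
        best_parents = max(candidates, key=candidates.get)
        dag[v] = set(best_parents)
        total += candidates[best_parents]
        seen.add(v)
    return dag, total

# Hypothetical score table; the empty parent set is always available.
scores = {
    "A": {frozenset(): 0.0, frozenset({"B"}): 2.0},
    "B": {frozenset(): 0.0, frozenset({"A"}): 3.0},
    "C": {frozenset(): 0.0, frozenset({"A", "B"}): 5.0},
}
print(best_dag_for_ordering(["A", "B", "C"], scores))
# For the ordering A, B, C: B picks parent {'A'}, C picks {'A', 'B'}, total score 8.0
```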

#15 Enhancing Balanced Graph Edge Partition with Effective Local Search

Authors: Zhenyu Guo ; Mingyu Xiao ; Yi Zhou ; Dongxiang Zhang ; Kian-Lee Tan

Graph partition is a key component to achieve workload balance and reduce job completion time in parallel graph processing systems. Among the various partition strategies, edge partition has demonstrated more promising performance in power-law graphs than vertex partition and thereby has been more widely adopted as the default partition strategy by existing graph systems. The graph edge partition problem, which is to split the edge set into multiple balanced parts with the objective of minimizing the total number of copied vertices, has been widely studied from the view of optimization and algorithms. In this paper, we study local search algorithms for this problem to further improve the partition results from existing methods. More specifically, we propose two novel concepts, namely adjustable edges and blocks. Based on these, we develop a greedy heuristic as well as an improved search algorithm utilizing the property of the max-flow model. To evaluate the performance of our algorithms, we first provide a theoretical analysis in terms of approximation quality. We significantly improve the previously known approximation ratio for this problem. Then we conduct extensive experiments on a large number of benchmark datasets and state-of-the-art edge partition strategies. The results show that our proposed local search framework can further improve the quality of graph partition by a wide margin.

#16 Submodular Span, with Applications to Conditional Data Summarization

Authors: Lilly Kumari ; Jeff Bilmes

As an extension to the matroid span problem, we propose the submodular span problem that involves finding a large set of elements with small gain relative to a given query set. We then propose a two-stage Submodular Span Summarization (S3) framework to achieve a form of conditional or query-focused data summarization. The first stage encourages the summary to be relevant to a given query set, and the second stage encourages the final summary to be diverse, thus achieving two important necessities for a good query-focused summary. Unlike previous methods, our framework uses only a single submodular function defined over both data and query. We analyze theoretical properties in the context of both matroids and polymatroids that elucidate when our methods should work well. We find that a scalable approximation algorithm to the polymatroid submodular span problem has good theoretical and empirical properties. We provide empirical and qualitative results on three real-world tasks: conditional multi-document summarization on the DUC 2005-2007 datasets, conditional video summarization on the UT-Egocentric dataset, and conditional image corpus summarization on the ImageNet dataset. We use deep neural networks, specifically a BERT model for text, AlexNet for video frames, and Bi-directional Generative Adversarial Networks (BiGAN) for ImageNet images to help instantiate the submodular functions. The result is a minimally supervised form of conditional summarization that matches or improves over the previous state-of-the-art.

#17 EECBS: A Bounded-Suboptimal Search for Multi-Agent Path Finding

Authors: Jiaoyang Li ; Wheeler Ruml ; Sven Koenig

Multi-Agent Path Finding (MAPF), i.e., finding collision-free paths for multiple robots, is important for many applications where small runtimes are necessary, including the kind of automated warehouses operated by Amazon. CBS is a leading two-level search algorithm for solving MAPF optimally. ECBS is a bounded-suboptimal variant of CBS that uses focal search to speed up CBS by sacrificing optimality and instead guaranteeing that the costs of its solutions are within a given factor of optimal. In this paper, we study how to decrease its runtime even further using inadmissible heuristics. Motivated by Explicit Estimation Search (EES), we propose Explicit Estimation CBS (EECBS), a new bounded-suboptimal variant of CBS, that uses online learning to obtain inadmissible estimates of the cost of the solution of each high-level node and uses EES to choose which high-level node to expand next. We also investigate recent improvements of CBS and adapt them to EECBS. We find that EECBS with the improvements runs significantly faster than the state-of-the-art bounded-suboptimal MAPF algorithms ECBS, BCP-7, and eMDD-SAT on a variety of MAPF instances. We hope that the scalability of EECBS enables additional applications for bounded-suboptimal MAPF algorithms.

#18 Correlation-Aware Heuristic Search for Intelligent Virtual Machine Provisioning in Cloud Systems

Authors: Chuan Luo ; Bo Qiao ; Wenqian Xing ; Xin Chen ; Pu Zhao ; Chao Du ; Randolph Yao ; Hongyu Zhang ; Wei Wu ; Shaowei Cai ; Bing He ; Saravanakumar Rajmohan ; Qingwei Lin

The optimization of resources is crucial for the operation of public cloud systems such as Microsoft Azure, as well as servers dedicated to the workloads of large customers such as Microsoft 365. Those optimization tasks often need to take unknown parameters into consideration and can be formulated as Prediction+Optimization problems. This paper proposes a new Prediction+Optimization method named Correlation-Aware Heuristic Search (CAHS) that is capable of accounting for the uncertainty in unknown parameters and delivering effective solutions to difficult optimization problems. We apply this method to solving the predictive virtual machine (VM) provisioning (PreVMP) problem, where the VM provisioning plans are optimized based on the predicted demands of different VM types, to ensure rapid provisioning upon customers' requests and to pursue high resource utilization. Unlike the current state-of-the-art PreVMP approaches that assume independence among the demands for different VM types, CAHS incorporates demand correlation when conducting prediction and optimization in a novel and effective way. Our experiments on two public benchmarks and one industrial benchmark demonstrate that CAHS can achieve better performance than its nine state-of-the-art competitors. CAHS has been successfully deployed in Microsoft Azure and significantly improved its performance. The main ideas of CAHS have also been leveraged to improve the efficiency and the reliability of the cloud services provided by Microsoft 365.

#19 Single Player Monte-Carlo Tree Search Based on the Plackett-Luce Model

Authors: Felix Mohr ; Viktor Bengs ; Eyke Hüllermeier

The problem of minimal cost path search is especially difficult when no useful heuristics are available. A common solution is roll-out-based search like Monte Carlo Tree Search (MCTS). However, MCTS is mostly used in stochastic or adversarial environments, with the goal to identify an agent's best next move. For this reason, even though single player versions of MCTS exist, most algorithms, including UCT, are not directly tailored to classical minimal cost path search. We present Plackett-Luce MCTS (PL-MCTS), a path search algorithm based on a probabilistic model over the qualities of successor nodes. We empirically show that PL-MCTS is competitive and often superior to the state of the art.
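
The Plackett-Luce model the algorithm is named after assigns, to positive quality parameters v_1, ..., v_m of the m successor nodes, a probability to every ranking sigma of those successors (standard form; how PL-MCTS uses it for node selection is detailed in the paper):

```latex
\[
P(\sigma \mid v_1,\dots,v_m) \;=\; \prod_{i=1}^{m}
\frac{v_{\sigma(i)}}{\sum_{j=i}^{m} v_{\sigma(j)}} .
\]
```

The ranking is built top-down, at each stage choosing the next item with probability proportional to its quality among the items not yet placed; in particular, the probability that a given successor is ranked first is its quality divided by the sum of all successor qualities.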

#20 Policy-Guided Heuristic Search with Guarantees

Authors: Laurent Orseau ; Levi H. S. Lelis

The use of a policy and a heuristic function for guiding search can be quite effective in adversarial problems, as demonstrated by AlphaGo and its successors, which are based on the PUCT search algorithm. While PUCT can also be used to solve single-agent deterministic problems, it lacks guarantees on its search effort and it can be computationally inefficient in practice. Combining the A* algorithm with a learned heuristic function tends to work better in these domains, but A* and its variants do not use a policy. Moreover, the purpose of using A* is to find solutions of minimum cost, while we seek instead to minimize the search loss (e.g., the number of search steps). LevinTS is guided by a policy and provides guarantees on the number of search steps that relate to the quality of the policy, but it does not make use of a heuristic function. In this work we introduce Policy-guided Heuristic Search (PHS), a novel search algorithm that uses both a heuristic function and a policy and has theoretical guarantees on the search loss that relate to both the quality of the heuristic and the quality of the policy. We show empirically on the sliding-tile puzzle, Sokoban, and a puzzle from the commercial game `The Witness' that PHS enables the rapid learning of both a policy and a heuristic function and compares favorably with A*, Weighted A*, Greedy Best-First Search, LevinTS, and PUCT in terms of number of problems solved and search time in all three domains tested.

#21 Deep Innovation Protection: Confronting the Credit Assignment Problem in Training Heterogeneous Neural Architectures

Authors: Sebastian Risi ; Kenneth O. Stanley

Deep reinforcement learning approaches have shown impressive results in a variety of different domains; however, more complex heterogeneous architectures such as world models require the different neural components to be trained separately instead of end-to-end. While a simple genetic algorithm recently showed that end-to-end training is possible, it failed to solve a more complex 3D task. This paper presents a method called Deep Innovation Protection (DIP) that addresses the credit assignment problem in training complex heterogeneous neural network models end-to-end for such environments. The main idea behind the approach is to employ multiobjective optimization to temporally reduce the selection pressure on specific components in a multi-component network, allowing other components to adapt. We investigate the emergent representations of these evolved networks, which learn to predict properties important for the survival of the agent, without the need for a specific forward-prediction loss.

#22 Weighting-based Variable Neighborhood Search for Optimal Camera Placement

Authors: Zhouxing Su ; Qingyun Zhang ; Zhipeng Lü ; Chu-Min Li ; Weibo Lin ; Fuda Ma

The optimal camera placement problem (OCP) aims to accomplish surveillance tasks with the minimum number of cameras, which is one of the topics in the GECCO 2020 Competition and can be modeled as the unicost set covering problem (USCP). This paper presents a weighting-based variable neighborhood search (WVNS) algorithm for solving OCP. First, it simplifies the problem instances with four reduction rules based on dominance and independence. Then, WVNS converts the simplified OCP into a series of decision unicost set covering subproblems and tackles them with a fast local search procedure featuring a swap-based neighborhood structure. WVNS employs an efficient incremental evaluation technique and further boosts the neighborhood evaluation by exploiting the dominance and independence features among neighborhood moves. Computational experiments on the 69 benchmark instances introduced in the GECCO 2020 Competition on OCP and USCP show that WVNS is extremely competitive compared to the state-of-the-art methods. It outperforms or matches several best-performing competitors on all instances in both the OCP and USCP tracks of the competition, and its advantage on 15 large-scale instances is over 10%. In addition, WVNS improves the previous best known results for 12 classical benchmark instances in the literature.

#23 Multi-Goal Multi-Agent Path Finding via Decoupled and Integrated Goal Vertex Ordering

Author: Pavel Surynek

We introduce multi-goal multi-agent path finding (MG-MAPF), which generalizes the standard discrete multi-agent path finding (MAPF) problem. While the task in MAPF is to navigate agents in an undirected graph from their starting vertices to one individual goal vertex per agent, MG-MAPF assigns each agent multiple goal vertices and the task is to visit each of them at least once. Solving MG-MAPF not only requires finding collision-free paths for individual agents but also determining the order in which each agent visits its goal vertices so that common objectives like the sum-of-costs are optimized. We suggest two novel algorithms using different paradigms to address MG-MAPF: a heuristic search-based algorithm called Hamiltonian-CBS (HCBS) and a compilation-based algorithm built upon satisfiability modulo theories (SMT), called SMT-Hamiltonian-CBS (SMT-HCBS).

#24 Bayes DistNet - A Robust Neural Network for Algorithm Runtime Distribution Predictions

Authors: Jake Tuero ; Michael Buro

Randomized algorithms are used in many state-of-the-art solvers for constraint satisfaction problems (CSP) and Boolean satisfiability (SAT) problems. For many of these problems, there is no single solver which will dominate others. Having access to the underlying runtime distributions (RTD) of these solvers can allow for better use of algorithm selection, algorithm portfolios, and restart strategies. Previous state-of-the-art methods directly try to predict a fixed parametric distribution that the input instance follows. In this paper, we extend RTD prediction models into the Bayesian setting for the first time. This new model achieves robust predictive performance in the low observation setting, as well as handling censored observations. This technique also allows for richer representations which cannot be achieved by the classical models which restrict their output representations. Our model outperforms the previous state-of-the-art model in settings in which data is scarce, and can make use of censored data such as lower bound time estimates, where that type of data would otherwise be discarded. It can also quantify its uncertainty in its predictions, allowing for algorithm portfolio models to make better informed decisions about which algorithm to run on a particular instance.
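
Censored runtimes enter a distributional model through the standard survival-analysis likelihood (our illustration of the general principle, not the paper's exact training objective): a completed run contributes its density value, while a run cut off at a timeout contributes only the probability mass beyond the cutoff.

```latex
\[
\log \mathcal{L}(\theta) \;=\;
\sum_{i \,\in\, \text{observed}} \log f_\theta(t_i)
\;+\;
\sum_{i \,\in\, \text{censored}} \log\big(1 - F_\theta(c_i)\big),
\]
```

where f_theta and F_theta are the predicted runtime density and CDF, t_i is the time of a completed run, and c_i is the cutoff at which run i was stopped without finishing.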

#25 Learning Branching Heuristics for Propositional Model Counting

Authors: Pashootan Vaezipoor ; Gil Lederman ; Yuhuai Wu ; Chris Maddison ; Roger B Grosse ; Sanjit A. Seshia ; Fahiem Bacchus

Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It is able to achieve these results on a number of different problem families having very different structures. In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.